
# Multi-stage Training

## Olmo 2 0425 1B

License: Apache-2.0

OLMo 2 1B is the smallest model in the OLMo 2 series of open language models released by the Allen Institute for AI. It was pre-trained on the OLMo-mix-1124 dataset and then further trained on the Dolmino-mix-1124 dataset during a mid-training stage.

Tags: Large Language Model · Transformers · English
Publisher: allenai · Downloads: 13.31k · Likes: 45
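
A minimal usage sketch with Hugging Face transformers is shown below. The repo id "allenai/OLMo-2-0425-1B" is an assumption inferred from the card name, not confirmed by this page.

```python
# Minimal generation sketch; the repo id below is assumed, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0425-1B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Multi-stage training means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```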
## Pegasus Indian Legal

License: MIT

A legal text summarization model based on legal-pegasus and fine-tuned on Indian legal datasets.

Tags: Large Language Model · Transformers
Publisher: akhilm97 · Downloads: 104 · Likes: 2
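
If the model follows the standard Pegasus setup, a summarization call could look like the sketch below; the repo id "akhilm97/pegasus_indian_legal" and the sample text are assumptions for illustration.

```python
# Hypothetical summarization sketch; repo id and input text are assumed.
from transformers import pipeline

summarizer = pipeline("summarization", model="akhilm97/pegasus_indian_legal")
judgment = (
    "The appellant was convicted under Section 302 of the Indian Penal Code. "
    "On appeal, the High Court re-examined the record and held that the "
    "testimony of the sole eyewitness was unreliable."
)
print(summarizer(judgment, max_length=60, min_length=10)[0]["summary_text"])
```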
## Colossal LLaMA 2 7b Base

An open-source Chinese-English bilingual large language model built on LLaMA-2, continually pre-trained on roughly 8.5 billion tokens, with a 4096-token context window.

Tags: Large Language Model · Transformers · Supports Multiple Languages
Publisher: hpcai-tech · Downloads: 147 · Likes: 76
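
Because the model extends LLaMA-2 for Chinese, one quick way to see the bilingual tokenizer at work is the sketch below; the repo id "hpcai-tech/Colossal-LLaMA-2-7b-base" is assumed from the card name.

```python
# Tokenizer sketch; the repo id is assumed, not confirmed by this page.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("hpcai-tech/Colossal-LLaMA-2-7b-base")
text = "大语言模型可以同时理解中文和英文。"  # "LLMs can understand Chinese and English."
ids = tok(text)["input_ids"]
print(len(ids), tok.convert_ids_to_tokens(ids))
# Any single input must fit the model's 4096-token context window.
```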
## Indobert Base P1

License: MIT

IndoBERT is an advanced Indonesian language model based on BERT, trained with the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) objectives.

Tags: Large Language Model · Other
Publisher: indobenchmark · Downloads: 261.95k · Likes: 25
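
Since the card says the model was trained with an MLM objective, a fill-mask call is a natural smoke test; the repo id "indobenchmark/indobert-base-p1" is assumed from the publisher and card name.

```python
# Fill-mask sketch illustrating the MLM objective; repo id is assumed.
from transformers import pipeline

fill = pipeline("fill-mask", model="indobenchmark/indobert-base-p1")
# Indonesian for: "The capital of Indonesia is [MASK]."
for pred in fill("Ibu kota Indonesia adalah [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```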